Stein Unbiased GrAdient estimator of the Risk (SUGAR) for Multiple Parameter Selection

Authors

  • Charles-Alban Deledalle
  • Samuel Vaiter
  • Mohamed-Jalal Fadili
  • Gabriel Peyré
Abstract

Algorithms to solve variational regularization of ill-posed inverse problems usually involve operators that depend on a collection of continuous parameters. When these operators enjoy some (local) regularity, these parameters can be selected using the so-called Stein Unbiased Risk Estimate (SURE). While this selection is usually performed by exhaustive search, we address in this work the problem of using the SURE to efficiently optimize for a collection of continuous parameters of the model. When considering non-smooth regularizers, such as the popular ℓ1-norm, which corresponds to the soft-thresholding mapping, the SURE is a discontinuous function of the parameters, preventing the use of gradient descent optimization techniques. Instead, we focus on an approximation of the SURE based on finite differences as proposed in [51]. Under mild assumptions on the estimation mapping, we show that this approximation is a weakly differentiable function of the parameters and its weak gradient, coined the Stein Unbiased GrAdient estimator of the Risk (SUGAR), provides an asymptotically (with respect to the data dimension) unbiased estimate of the gradient of the risk. Moreover, in the particular case of soft-thresholding, it is also proved to be a consistent estimator. This gradient estimate can then be used as a basis to perform a quasi-Newton optimization. The computation of the SUGAR relies on the closed-form (weak) differentiation of the non-smooth function. We provide its expression for a large class of iterative methods including proximal splitting ones and apply our strategy to regularizations involving non-smooth convex structured penalties. Illustrations on various image restoration and matrix completion problems are given.
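A minimal numerical sketch of the ingredients the abstract names, not the authors' code: in the denoising setting y ~ N(μ, σ²I) with the soft-thresholding estimator, it compares the closed-form SURE (whose weak divergence counts the entries above the threshold) against the finite-difference Monte Carlo variant of [51] — the quantity whose weak gradient with respect to the parameter defines SUGAR. The sparse signal, noise level, and step `eps` are illustrative choices.

```python
import numpy as np

def soft(y, lam):
    """Soft-thresholding, the proximal mapping of lam * ||.||_1."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def sure_soft(y, lam, sigma):
    """Closed-form SURE for soft-thresholding of y ~ N(mu, sigma^2 I):
    ||h(y) - y||^2 - n*sigma^2 + 2*sigma^2 * div h(y),
    where the weak divergence is the number of entries above the threshold."""
    n = y.size
    h = soft(y, lam)
    div = np.count_nonzero(np.abs(y) > lam)
    return np.sum((h - y) ** 2) - n * sigma**2 + 2 * sigma**2 * div

def sure_fd(y, lam, sigma, eps, rng):
    """SURE with the divergence replaced by the Monte Carlo finite
    difference <delta, (h(y + eps*delta) - h(y)) / eps>, delta ~ N(0, I).
    This is the finite-difference approximation of [51]; SUGAR is its
    weak gradient with respect to the parameters (here lam)."""
    n = y.size
    delta = rng.standard_normal(n)
    h = soft(y, lam)
    div = delta @ (soft(y + eps * delta, lam) - h) / eps
    return np.sum((h - y) ** 2) - n * sigma**2 + 2 * sigma**2 * div

rng = np.random.default_rng(0)
n, sigma = 10_000, 1.0
mu = np.where(rng.random(n) < 0.1, 5.0, 0.0)     # sparse ground truth
y = mu + sigma * rng.standard_normal(n)          # noisy observation

for lam in (0.5, 1.0, 2.0):
    risk = np.sum((soft(y, lam) - mu) ** 2)      # oracle squared error
    print(f"lam={lam}: risk={risk:.0f}  SURE={sure_soft(y, lam, sigma):.0f}  "
          f"FD-SURE={sure_fd(y, lam, sigma, 1e-3, rng):.0f}")
```

In this piecewise-linear case the finite difference is exact except for the few entries within `eps*|delta|` of the threshold, which is why both estimates track the oracle risk closely as the dimension grows.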


Related articles

Classic and Bayes Shrinkage Estimation in Rayleigh Distribution Using a Point Guess Based on Censored Data

In classical methods of statistics, the parameter of interest is estimated based on a random sample using natural estimators such as maximum likelihood or unbiased estimators (sample information). In practice, the researcher has prior information about the parameter in the form of a point guess value. Information in the guess value is called nonsample information. Thomp...


Smooth James-Stein model selection against erratic Stein unbiased risk estimate to select several regularization parameters

Smooth James-Stein thresholding-based estimators enjoy smoothness like ridge regression and perform variable selection like the lasso. They have added flexibility thanks to more than one regularization parameter (like the adaptive lasso), and the ability to select these parameters well thanks to an unbiased and smooth estimate of the risk. The motivation is a gravitational wave burst detection proble...


On Concomitants of Order Statistics from Farlie-Gumbel-Morgenstern Bivariate Lomax Distribution and its Application in Estimation

In this paper, we have dealt with the distribution theory of concomitants of order statistics arising from the Farlie-Gumbel-Morgenstern bivariate Lomax distribution. We have discussed the estimation of the parameters associated with the distribution of the variable Y of primary interest, based on the ranked set sample defined by ordering the marginal observations...


ESTIMATORS BASED ON FUZZY RANDOM VARIABLES AND THEIR MATHEMATICAL PROPERTIES

In statistical inference, the point estimation problem is very crucial and has a wide range of applications. When we deal with concepts such as random variables, the parameters of interest and the estimates may be reported/observed as imprecise. Therefore, the theory of fuzzy sets plays an important role in formulating such situations. In this paper, we first recall the crisp uniformly minimum ...


Excess Optimism: How Biased is the Apparent Error of an Estimator Tuned by SURE?

Nearly all estimators in statistical prediction come with an associated tuning parameter, in one way or another. Common practice, given data, is to choose the tuning parameter value that minimizes a constructed estimate of the prediction error of the estimator; we focus on Stein’s unbiased risk estimator, or SURE (Stein, 1981; Efron, 1986), which forms an unbiased estimate of the prediction err...



Journal:
  • SIAM J. Imaging Sciences

Volume 7, Issue –

Pages –

Publication year: 2014